
Eigenvalues and Eigenvectors

5.1 Eigenvectors and Eigenvalues

Eigenvector

Nonzero vector $x$ such that

$$Ax = \lambda x$$

where $\lambda$ is the eigenvalue corresponding to the eigenvector $x$.

Given an eigenvalue $\lambda$, we can rearrange the above equation and solve for its corresponding eigenvector as a solution to the homogeneous system:

$$Ax - \lambda x = 0 \\ (A - \lambda I)x = 0$$

Eigenspace

The set of all solutions $x$ to $(A - \lambda_i I)x = 0$ is called the eigenspace of $A$ corresponding to an eigenvalue $\lambda_i$, which includes $\vec 0$ and all the eigenvectors $x$ corresponding to $\lambda_i$.

Theorem 1

The eigenvalues of a triangular matrix are the entries on its main diagonal.

Theorem 2

If $v_1, \dots, v_r$ are eigenvectors that correspond to distinct eigenvalues $\lambda_1, \dots, \lambda_r$ of an $n \times n$ matrix $A$, then the set $\{v_1, \dots, v_r\}$ is linearly independent.

Intuitively, this makes sense because we can't have two different stretching factors $\lambda_1$ and $\lambda_2$ along the same eigenvector $x$.

Questions

Is $\lambda = 3$ an eigenvalue of $A = \begin{bmatrix} 1 & 2 & 2\\ 3 & -2 & 1\\ 0 & 1 & 1 \end{bmatrix}$? If so, find one corresponding eigenvector.

Starting with the definition of an eigenvector:

$$Ax = \lambda x$$

We rearrange the equation:

$$(A - \lambda I)x = 0$$

Now, $\lambda = 3$ is an eigenvalue only if the following system has a nonzero solution $x$:

$$(A - 3I)x = 0$$

We find a solution by row-reducing the matrix $A - 3I$:

$$A - 3I = \begin{bmatrix} -2 & 2 & 2\\ 3 & -5 & 1\\ 0 & 1 & -2 \end{bmatrix} \sim \begin{bmatrix} 1 & -1 & -1\\ 0 & 1 & -2\\ 0 & 0 & 0 \end{bmatrix}$$

This system has a free variable $x_3$, so it has nontrivial solutions: $x_2 = 2x_3$ and $x_1 = x_2 + x_3 = 3x_3$. Thus, $3$ is an eigenvalue of $A$, and taking $x_3 = 1$ gives the corresponding eigenvector $x = \begin{bmatrix} 3\\2\\1 \end{bmatrix}$.
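This result is easy to sanity-check numerically. A minimal sketch with numpy (the library choice is ours, not part of the notes):

```python
import numpy as np

A = np.array([[1, 2, 2],
              [3, -2, 1],
              [0, 1, 1]])

# lambda = 3 is an eigenvalue iff A - 3I is singular.
det = np.linalg.det(A - 3 * np.eye(3))
assert abs(det) < 1e-9  # singular, so 3 is an eigenvalue

# Check the eigenvector equation Ax = 3x for x = (3, 2, 1).
v = np.array([3, 2, 1])
print(A @ v)  # [9 6 3], which is 3 * v
```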

Find a basis for the eigenspace corresponding to $\lambda = -2, 5$ of the matrix $A = \begin{bmatrix} 1 & 4\\ 3 & 2 \end{bmatrix}$

The eigenspace is the set of all solutions $x$ to the homogeneous equation $(A - \lambda_i I)x = \vec 0$. Using the given eigenvalues:

$$(A - (-2)I)x = \vec 0 \\ \begin{bmatrix} 3 & 4\\ 3 & 4 \end{bmatrix}x = 0 \\ \to \vec x = \begin{bmatrix} -4\\3 \end{bmatrix}$$

So the eigenspace corresponding to $\lambda = -2$ is all vectors along $\begin{bmatrix} -4\\3 \end{bmatrix}$

Similarly, for $\lambda = 5$:

$$(A - 5I)x = \vec 0 \\ \begin{bmatrix} -4 & 4\\ 3 & -3 \end{bmatrix}x = 0 \\ \to \vec x = \begin{bmatrix} 1\\1 \end{bmatrix}$$
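Both eigenpairs can be double-checked with numpy's eigensolver (a sketch; numpy is an assumption here):

```python
import numpy as np

A = np.array([[1, 4],
              [3, 2]])

vals, vecs = np.linalg.eig(A)
print(np.sort(np.round(vals).astype(int)))  # [-2  5]

# Each column of vecs satisfies A v = lambda v.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```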

True or False: To find the eigenvalues of $A$, reduce $A$ to echelon form.

False. Row operations change the characteristic polynomial, thus changing the eigenvalues. We find eigenvalues by solving for the roots of the characteristic polynomial $\det(A - \lambda I) = 0$.

5.2 The Characteristic Equation

The eigenvalues of a matrix $A$ are all scalars $\lambda_i$ such that the homogeneous system

$$(A - \lambda I)x = 0$$

has a nontrivial solution $x \neq \vec 0$. This problem is equivalent to finding all $\lambda_i$ such that the matrix $A - \lambda I$ is not invertible (by the Invertible Matrix Theorem, the homogeneous system has a nontrivial solution only when the matrix is not invertible). We know that a matrix is not invertible iff:

$$\det(A - \lambda I) = 0$$

The above equation is called the characteristic equation. The algebraic multiplicity of an eigenvalue $\lambda_i$ is its multiplicity as a root of the characteristic equation.

Theorem 4

If $n \times n$ matrices $A$ and $B$ are similar, then they have the same characteristic polynomial and hence the same eigenvalues with the same multiplicities.

Note: Similarity is not the same as row equivalence.

Questions

Find the characteristic polynomial and eigenvalues of $A = \begin{bmatrix} 7 & -2\\ 2 & 3 \end{bmatrix}$

$$\det(A - \lambda I) = 0 \\ \begin{vmatrix} 7-\lambda & -2\\ 2 & 3-\lambda \end{vmatrix} = 0 \\ (7-\lambda)(3-\lambda) + 4 = 0$$

Given the above characteristic polynomial, we solve for eigenvalues by finding the roots.

$$\lambda^2 - 10\lambda + 25 = 0 \\ (\lambda - 5)^2 = 0$$

We find that $A$ has the eigenvalue $\lambda = 5$ with algebraic multiplicity (exponent) $2$.
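The characteristic polynomial itself can be recovered numerically: given a square matrix, `np.poly` returns the coefficients of $\det(\lambda I - A)$, highest degree first. A quick sketch (numpy usage is our addition):

```python
import numpy as np

A = np.array([[7, -2],
              [2, 3]])

# Coefficients of the characteristic polynomial, highest degree first.
coeffs = np.poly(A)
print(np.round(coeffs).astype(int))  # [  1 -10  25] -> lambda^2 - 10 lambda + 25

# Its roots are the eigenvalues: 5 with multiplicity 2.
print(np.roots(coeffs))
```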

List the eigenvalues of $A$ with their multiplicities.

$$A = \begin{bmatrix} 5 & 0 & 0 & 0\\ 8 & -4 & 0 & 0\\ 0 & 7 & 1 & 0\\ 1 & -5 & 2 & 1 \end{bmatrix}$$

By Theorem 1, we know that the eigenvalues of a triangular matrix are the entries on its diagonal. Thus, $\lambda = 1$ (multiplicity $2$), $\lambda = -4$, and $\lambda = 5$.

Find $h$ in the matrix $A$ such that the eigenspace of $\lambda = 5$ is $2$-dimensional.

$$A = \begin{bmatrix} 5 & -2 & 6 & -1\\ 0 & 3 & h & 0\\ 0 & 0 & 5 & 4\\ 0 & 0 & 0 & 1 \end{bmatrix}$$

The eigenspace of an eigenvalue is the set of all solutions $x$ to $(A - \lambda_i I)x = 0$.

$$\begin{bmatrix} 0 & -2 & 6 & -1\\ 0 & -2 & h & 0\\ 0 & 0 & 0 & 4\\ 0 & 0 & 0 & -4 \end{bmatrix} \begin{bmatrix} x_1\\x_2\\x_3\\x_4 \end{bmatrix} = 0 \sim \begin{bmatrix} 0 & 1 & -3 & 0\\ 0 & 0 & h-6 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1\\x_2\\x_3\\x_4 \end{bmatrix} = 0$$

If $h = 6$, then this homogeneous system has $2$ free variables ($x_1$ and $x_3$), making the eigenspace $2$-dimensional.
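Since the eigenspace dimension is the nullity of $A - 5I$, we can check the answer with a rank computation. A sketch (the helper name `eigenspace_dim` is ours):

```python
import numpy as np

def eigenspace_dim(h):
    # Nullity of A - 5I = 4 - rank(A - 5I).
    A = np.array([[5, -2, 6, -1],
                  [0, 3, h, 0],
                  [0, 0, 5, 4],
                  [0, 0, 0, 1]], dtype=float)
    return 4 - np.linalg.matrix_rank(A - 5 * np.eye(4))

print(eigenspace_dim(6))  # 2
print(eigenspace_dim(0))  # 1  (any h != 6 gives a 1-dimensional eigenspace)
```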

Use a property of determinants to show that $A$ and $A^\top$ have the same characteristic polynomial.

Let $B = A - \lambda I$, so $B^\top = (A - \lambda I)^\top = A^\top - \lambda I$. Since $\det B = \det B^\top$:

$$\det(A - \lambda I) = \det(A^\top - \lambda I)$$

5.3 Diagonalization

Theorem 5: The Diagonalization Theorem

An $n \times n$ matrix $A$ is diagonalizable iff $A$ has $n$ linearly independent eigenvectors.

$$A = PDP^{-1}$$

where the $n$ eigenvectors form the columns of $P$, and $D$ is a diagonal matrix whose entries are the corresponding eigenvalues of $A$.

Example: Diagonalize $A = \begin{bmatrix} 1 & 3 & 3\\ -3 & -5 & -3\\ 3 & 3 & 1 \end{bmatrix}$

First, we find the eigenvalues of $A$ as the roots of the characteristic polynomial:

$$\det(A - \lambda I) = 0 \\ \begin{vmatrix} 1-\lambda & 3 & 3\\ -3 & -5-\lambda & -3\\ 3 & 3 & 1-\lambda \end{vmatrix} = 0 \\ (1-\lambda)((-5-\lambda)(1-\lambda)+9) - 3((1-\lambda)(-3)+9) + 3(-9+3(5+\lambda)) = 0 \\ \dots \\ -(\lambda-1)(\lambda+2)^2 = 0 \\ \lambda = 1, -2$$

Then we find the eigenvectors corresponding to the two eigenvalues by plugging them into the homogeneous equation $(A - \lambda I)x = 0$, where the nonzero solutions $x$ to this homogeneous system are eigenvectors corresponding to the eigenvalue. By the Diagonalization Theorem, there must be exactly $n$ linearly independent eigenvectors for a matrix to be diagonalizable.

$$\lambda = 1: \begin{bmatrix} 1\\-1\\1 \end{bmatrix} \qquad \lambda = -2: \begin{bmatrix} -1\\1\\0 \end{bmatrix}, \begin{bmatrix} -1\\0\\1 \end{bmatrix}$$

Now we construct $P$ and $D$:

$$P = \begin{bmatrix} 1 & -1 & -1\\ -1 & 1 & 0\\ 1 & 0 & 1 \end{bmatrix} \qquad D = \begin{bmatrix} 1 & 0 & 0\\ 0 & -2 & 0\\ 0 & 0 & -2 \end{bmatrix}$$
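A diagonalization is easy to verify end-to-end: multiplying $PDP^{-1}$ back out should reproduce $A$. A numpy sketch (the check, not the notes' method):

```python
import numpy as np

A = np.array([[1, 3, 3],
              [-3, -5, -3],
              [3, 3, 1]])
P = np.array([[1, -1, -1],
              [-1, 1, 0],
              [1, 0, 1]])
D = np.diag([1, -2, -2])

# The Diagonalization Theorem: A = P D P^-1.
assert np.allclose(P @ D @ np.linalg.inv(P), A)
print("A = PDP^-1 holds")
```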

Theorem 6

An $n \times n$ matrix with $n$ distinct eigenvalues is diagonalizable, because distinct eigenvalues guarantee $n$ linearly independent eigenvectors. This is a sufficient condition for a matrix to be diagonalizable. However, it doesn't mean that a diagonalizable matrix must have $n$ distinct eigenvalues.

Questions

Diagonalize $A = \begin{bmatrix} 2 & 3\\ 4 & 1 \end{bmatrix}$

We find a diagonal matrix $D$ such that its entries are the two eigenvalues, and a matrix $P$ such that its $2$ columns are the $2$ linearly independent eigenvectors corresponding to the eigenvalues.

We find the eigenvalues as the roots of the characteristic polynomial:

$$\det(A - \lambda I) = 0 \\ (1-\lambda)(2-\lambda) - 12 = 0 \\ \lambda^2 - 3\lambda - 10 = 0 \\ (\lambda - 5)(\lambda + 2) = 0 \\ \lambda = 5, -2$$

The matrix $A$ has $2$ distinct eigenvalues, which is sufficient for diagonalization by Theorem 6: it guarantees $2$ linearly independent eigenvectors, which in turn guarantees that the matrix is diagonalizable by the Diagonalization Theorem.

$$D = \begin{bmatrix} 5 & 0\\ 0 & -2 \end{bmatrix}$$

We find the two eigenvectors as solutions to the homogeneous equations of their corresponding eigenvalues:

$$(A - \lambda I)x = 0 \\ \lambda = 5: \begin{bmatrix} -3 & 3\\ 4 & -4 \end{bmatrix}x = 0 \to x = \begin{bmatrix} 1\\1 \end{bmatrix} \\ \lambda = -2: \begin{bmatrix} 4 & 3\\ 4 & 3 \end{bmatrix}x = 0 \to x = \begin{bmatrix} -3\\4 \end{bmatrix}$$
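Both eigenpairs can be checked directly against the definition $Av = \lambda v$ (a numpy sketch, our addition):

```python
import numpy as np

A = np.array([[2, 3],
              [4, 1]])

# A v = lambda v for each eigenpair.
assert np.allclose(A @ np.array([1, 1]), 5 * np.array([1, 1]))
assert np.allclose(A @ np.array([-3, 4]), -2 * np.array([-3, 4]))
print("eigenpairs verified")
```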

Diagonalize $A = \begin{bmatrix} 0 & -4 & -6\\ -1 & 0 & -3\\ 1 & 2 & 5 \end{bmatrix}$, given $\lambda = 2, 1$

We're given the eigenvalues but not their multiplicities, which is a bit annoying, but that's ok. Let's just find the eigenvectors as the solutions to the homogeneous system:

$$(A - \lambda I)x = 0 \\ \lambda = 2: \begin{bmatrix} -2 & -4 & -6\\ -1 & -2 & -3\\ 1 & 2 & 3 \end{bmatrix} \to x = \begin{bmatrix} -2\\1\\0 \end{bmatrix}, \begin{bmatrix} -3\\0\\1 \end{bmatrix} \\ \lambda = 1: \begin{bmatrix} -1 & -4 & -6\\ -1 & -1 & -3\\ 1 & 2 & 4 \end{bmatrix} \to x = \begin{bmatrix} -2\\-1\\1 \end{bmatrix}$$

We have $3$ linearly independent eigenvectors, thus the matrix is diagonalizable by the Diagonalization Theorem:

$$P = \begin{bmatrix} -2 & -3 & -2\\ 1 & 0 & -1\\ 0 & 1 & 1 \end{bmatrix} \qquad D = \begin{bmatrix} 2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{bmatrix}$$
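As before, the result can be confirmed by multiplying the factorization back out (numpy sketch):

```python
import numpy as np

A = np.array([[0, -4, -6],
              [-1, 0, -3],
              [1, 2, 5]])
P = np.array([[-2, -3, -2],
              [1, 0, -1],
              [0, 1, 1]])
D = np.diag([2, 2, 1])

# Even with the repeated eigenvalue 2, P is invertible and A = P D P^-1.
assert np.allclose(P @ D @ np.linalg.inv(P), A)
print("A = PDP^-1 holds")
```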

True or False: If $A$ is diagonalizable, then $A$ is invertible.

False. A diagonalizable matrix may have $\lambda = 0$ as an eigenvalue, which would make it non-invertible. For example, $A = \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix}$ is already diagonal (hence diagonalizable) but not invertible.

5.4 Eigenvectors and Linear Transformations

Theorem 8: Diagonal Matrix Representation

Suppose $A$ is diagonalizable as $A = PDP^{-1}$. The columns of $P$ form a basis for $\R^n$, thus $D$ is the $P$-coordinate matrix $[T]_P$ for the transformation $x \mapsto Ax$.

Informally, this theorem says that a matrix diagonalization expresses the matrix as a change of coordinates into the eigenbasis, a scaling along the axes (the eigenvectors), and a change back into the original coordinate space, where $D$ expresses the transformation $T$ in the eigenbasis $P$.

Thus, we can generalize $T(x) = Ax$ to a non-standard basis $\mathcal{B}$:

$$[T(x)]_\mathcal{B} = [T]_\mathcal{B}[x]_\mathcal{B}$$

Proof

We have the eigenbasis $P = \begin{bmatrix} b_1 & \dots & b_n \end{bmatrix}$. We can show that the matrix $D$ is the transformation $T$ in the eigenbasis $P$:

$$[T]_P = D$$

First, we remember that, in the standard basis, the columns $a_i$ of a linear transformation matrix $A$ are defined as $T(e_i)$:

$$A = \begin{bmatrix} T(e_1) & \dots & T(e_n) \end{bmatrix}$$

We can then extend this definition beyond the standard basis, to the eigenbasis $P$:

$$[T]_P = \begin{bmatrix} [T(b_1)]_P & \dots & [T(b_n)]_P \end{bmatrix}$$

Then, again using the definition $T(x) = Ax$:

$$[T]_P = \begin{bmatrix} [Ab_1]_P & \dots & [Ab_n]_P \end{bmatrix}$$

The change-of-coordinates matrix $P_{\mathcal{E} \leftarrow P}$ is just $P$, so $P_{P \leftarrow \mathcal{E}} = P^{-1}$:

$$[T]_P = \begin{bmatrix} P^{-1}Ab_1 & \dots & P^{-1}Ab_n \end{bmatrix}$$

Factoring out both matrices:

$$[T]_P = P^{-1}A\begin{bmatrix} b_1 & \dots & b_n \end{bmatrix} \\ [T]_P = P^{-1}AP = D$$
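The conclusion $[T]_P = P^{-1}AP = D$ can be illustrated numerically. A sketch with numpy; the matrix and its eigenvector columns are our own choice for illustration:

```python
import numpy as np

A = np.array([[2, 3],
              [4, 1]])
P = np.array([[1, -3],   # columns are eigenvectors of A (for lambda = 5, -2)
              [1, 4]])

# Conjugating A by the eigenbasis yields the diagonal matrix D.
D = np.linalg.inv(P) @ A @ P
print(np.round(D).astype(int))  # diagonal with 5 and -2 on the diagonal
```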

The Matrix of a Linear Transformation

Given the coordinates $[x]_\mathcal{B}$ of a vector relative to the basis $\mathcal{B}$, we can find the coordinates $[T(x)]_\mathcal{B}$ of the transformed vector $T(x)$ as:

$$[T(x)]_\mathcal{B} = A[x]_\mathcal{B}$$

where $A$ contains the transformed basis vectors:

$$A = \begin{bmatrix} [T(b_1)]_\mathcal{B} & \dots & [T(b_n)]_\mathcal{B} \end{bmatrix}$$

This is just a generalization of $A = \begin{bmatrix} T(e_1) & \dots & T(e_n) \end{bmatrix}$ in the standard basis. Notice that $A$ is not a change-of-coordinates matrix, because both sides are coefficients of $\mathcal{B}$-basis vectors.

Note: The set of all matrices similar to a matrix $A$ is equivalent to the set of all matrix representations of the transformation $x \mapsto Ax$.

Questions

Let $T: P_2 \to P_2$ by $T(p) = p(0) - p(1)t + p(2)t^2$.

a. Show that $T$ is a linear transformation

First, let's understand the problem. This transformation takes in a polynomial in a parameter $t$, for example $p(t) = t$. Then, we can plug this polynomial into the transformation as

$$T(p) = p(0) - p(1)t + p(2)t^2 \\ p(0) = 0, \quad p(1) = 1, \quad p(2) = 2 \\ \to T(p) = 0 - 1 \cdot t + 2t^2 = -t + 2t^2$$

To show that $T$ is a linear transformation, we show that it preserves vector addition and scalar multiplication.

To show vector addition, we show that $T(p+q) = T(p) + T(q)$:

$$T(p+q) = (p+q)(0) - (p+q)(1)t + (p+q)(2)t^2 \\ = p(0) + q(0) - p(1)t - q(1)t + p(2)t^2 + q(2)t^2 \\ = (p(0) - p(1)t + p(2)t^2) + (q(0) - q(1)t + q(2)t^2) \\ = T(p) + T(q)$$

To show scalar multiplication, we show that $T(cp) = cT(p)$:

$$T(cp) = (cp)(0) - (cp)(1)t + (cp)(2)t^2 \\ = c(p(0) - p(1)t + p(2)t^2) \\ = cT(p)$$

b. Find $T(p)$ when $p(t) = -2 + t$. Is $p$ an eigenvector of $T$?

First, we know that $T(p)$ has the terms $p(0), p(1), p(2)$, so we compute them individually:

$$p(0) = -2 + 0 = -2 \\ p(1) = -2 + 1 = -1 \\ p(2) = -2 + 2 = 0$$

Then, we just plug these terms in:

$$T(p) = p(0) - p(1)t + p(2)t^2 \\ T(-2+t) = -2 - (-1)t + 0t^2 \\ T(-2+t) = -2 + t$$

Thus, we find that $p(t) = -2 + t$ is an eigenvector of this linear transformation with an eigenvalue of $1$.

c. Find the matrix for $T$ relative to the basis $\{1, t, t^2\}$ for $P_2$

We know that the transformation matrix of a linear transformation is given by the transformed standard basis vectors:

$$M = \begin{bmatrix} T(e_1) & \dots & T(e_n) \end{bmatrix}$$

Thus, using the standard basis vectors in $P_2$:

$$\begin{bmatrix} 1\\0\\0 \end{bmatrix} = 1, \quad \begin{bmatrix} 0\\1\\0 \end{bmatrix} = t, \quad \begin{bmatrix} 0\\0\\1 \end{bmatrix} = t^2$$

$$T(1) = 1 - t + t^2 \\ T(t) = 0 - t + 2t^2 \\ T(t^2) = 0 - t + 4t^2$$

$$M = \begin{bmatrix} 1 & 0 & 0\\ -1 & -1 & -1\\ 1 & 2 & 4 \end{bmatrix}$$
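Working in coordinates relative to $\{1, t, t^2\}$, the matrix $M$ reproduces part b: applying it to the coordinate vector of $p(t) = -2 + t$ returns the same vector, confirming eigenvalue $1$. A numpy sketch of this check:

```python
import numpy as np

# Matrix of T relative to {1, t, t^2}, computed above.
M = np.array([[1, 0, 0],
              [-1, -1, -1],
              [1, 2, 4]])

# p(t) = -2 + t as a coordinate vector relative to {1, t, t^2}.
p = np.array([-2, 1, 0])
print(M @ p)  # [-2  1  0] -> T(p) = p, so p is an eigenvector with eigenvalue 1
```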

Let $\mathcal{B} = \{b_1, b_2, b_3\}$ be a basis for a vector space $V$. Find $T(2b_1 - b_2 + 4b_3)$ when $T$ is a linear transformation from $V$ to $V$ whose matrix relative to $\mathcal{B}$ is

$$[T]_\mathcal{B} = \begin{bmatrix} 0 & -6 & 1\\ 0 & 5 & -1\\ 1 & -2 & 7 \end{bmatrix}$$

As defined above, $[T]_\mathcal{B}$ is just the generalization of the transformation matrix $A$ to the non-standard basis $\mathcal{B}$ such that

$$[T(x)]_\mathcal{B} = [T]_\mathcal{B}[x]_\mathcal{B}$$

Thus:

$$[T(2b_1 - b_2 + 4b_3)]_\mathcal{B} = [T]_\mathcal{B}[x]_\mathcal{B} = \begin{bmatrix} 0 & -6 & 1\\ 0 & 5 & -1\\ 1 & -2 & 7 \end{bmatrix} \begin{bmatrix} 2\\-1\\4 \end{bmatrix} = \begin{bmatrix} 10\\-9\\32 \end{bmatrix}$$

So $T(2b_1 - b_2 + 4b_3) = 10b_1 - 9b_2 + 32b_3$.
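The matrix-vector product is quick to check (numpy sketch; variable names are ours):

```python
import numpy as np

T_B = np.array([[0, -6, 1],
                [0, 5, -1],
                [1, -2, 7]])
x_B = np.array([2, -1, 4])  # coordinates of 2b1 - b2 + 4b3

print(T_B @ x_B)  # [10 -9 32]
```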

Let $T: \R^2 \to \R^2$ by $T(x) = Ax$. Find a basis $\mathcal{B}$ for $\R^2$ with the property that $[T]_\mathcal{B}$ is diagonal.

$$A = \begin{bmatrix} 5 & -3\\ -7 & 1 \end{bmatrix}$$

By the Diagonal Matrix Representation Theorem, if a linear transformation $T: x \mapsto Ax$ is defined by a diagonalizable matrix $A = PDP^{-1}$:

$$T: x \mapsto Ax \\ T: x \mapsto PDP^{-1}x$$

where $P^{-1}$ is the change-of-coordinates matrix from the standard basis into the eigenbasis.

Thus, this problem is equivalent to diagonalizing the matrix $A$, where $P$ is the eigenbasis referred to as $\mathcal{B}$.

To diagonalize $A$, we first find its eigenvalues as the roots of the characteristic polynomial:

$$\det(A - \lambda I) = 0 \\ (1-\lambda)(5-\lambda) - 21 = 0 \\ \to \lambda = -2, 8 \\ \to [T]_\mathcal{B} = D = \begin{bmatrix} -2 & 0\\ 0 & 8 \end{bmatrix}$$

We then find the eigenvectors as solutions to the homogeneous system:

$$(A - \lambda I)x = 0 \\ \lambda = -2 \to x = \begin{bmatrix} 3/7\\1 \end{bmatrix} \\ \lambda = 8 \to x = \begin{bmatrix} -1\\1 \end{bmatrix}$$

Thus, the diagonalization:

$$D = \begin{bmatrix} -2 & 0\\ 0 & 8 \end{bmatrix}, \quad P = \begin{bmatrix} 3/7 & -1\\ 1 & 1 \end{bmatrix}$$
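Multiplying back out confirms that this eigenbasis makes $[T]_\mathcal{B}$ diagonal (numpy sketch):

```python
import numpy as np

A = np.array([[5, -3],
              [-7, 1]])
P = np.array([[3/7, -1],
              [1, 1]])
D = np.diag([-2, 8])

# In the basis B given by the columns of P, [T]_B = P^-1 A P = D.
assert np.allclose(P @ D @ np.linalg.inv(P), A)
print("[T]_B is diagonal in the eigenbasis B")
```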

Let $T: P_3 \to P_3$ by $T(p) = p(0) - p(1)t - p(1)t^2 + p(0)t^3$

a. Find $T(p)$ when $p(t) = 1 + t + t^2 + t^3$. Is $p$ an eigenvector of $T$? If $p$ is an eigenvector, what is its eigenvalue?

$T(p)$ has the terms $p(0)$ and $p(1)$, so let's compute them individually:

$$p(0) = 1 \\ p(1) = 4$$

Now, plug them into $T(p)$:

$$T(p) = 1 - 4t - 4t^2 + t^3$$

Since $T(p)$ is not a scalar multiple of $p$, this polynomial is not an eigenvector of $T$.

b. Find $T(p)$ when $p(t) = t + t^2$. Is $p$ an eigenvector of $T$? If $p$ is an eigenvector, what is its eigenvalue?

Similarly:

$$T(p) = 0 - 2t - 2t^2 + 0t^3 \\ = -2(t + t^2)$$

Thus, $t + t^2$ is an eigenvector of $T$ with an eigenvalue of $\lambda = -2$

True or False: If $A$ is similar to $B$, then $A^2$ is similar to $B^2$.

True. If $A = PBP^{-1}$, then:

$$A^2 = PBP^{-1}PBP^{-1} = PB^2P^{-1} \\ P^{-1}A^2P = B^2$$
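A numeric spot-check of this argument (numpy sketch; the particular $P$ and $B$ are arbitrary choices of ours):

```python
import numpy as np

P = np.array([[2., 1.],   # any invertible P works (det = 1 here)
              [1., 1.]])
B = np.array([[1., 2.],
              [3., 4.]])

A = P @ B @ np.linalg.inv(P)  # A is similar to B by construction

# A^2 is similar to B^2 via the same P.
assert np.allclose(np.linalg.inv(P) @ (A @ A) @ P, B @ B)
print("A^2 = P B^2 P^-1 holds")
```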

5.5 Complex Eigenvalues

A complex scalar $\lambda \in \mathbb{C}$ is an eigenvalue of $A$, satisfying $\det(A - \lambda I) = 0$, iff there is a nonzero vector $x$ in $\mathbb{C}^n$ such that $Ax = \lambda x$.

Example: The matrix $A = \begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}$ rotates the plane counterclockwise by a quarter-turn and has no eigenvectors in $\R^2$.

$$\lambda^2 + 1 = 0 \\ \lambda = \pm i$$

Theorem 9

Let $A$ be a real $2 \times 2$ matrix with a complex eigenvalue $\lambda = a - bi$ ($b \neq 0$) and an associated eigenvector $v \in \mathbb{C}^2$. Then

$$A = PCP^{-1}$$

where

$$P = \begin{bmatrix} \operatorname{Re}\, v & \operatorname{Im}\, v \end{bmatrix} \qquad C = \begin{bmatrix} a & -b\\ b & a \end{bmatrix}$$

Scaling Factor

We compute the scaling factor $r$ of a complex eigenvalue $\lambda = a + bi$ as its modulus:

$$r = |\lambda| = \sqrt{a^2 + b^2}$$

Example: List the eigenvalues of $A = \begin{bmatrix} 3 & 3\\ -3 & 3 \end{bmatrix}$. Give the scaling factor $r$ and angle of rotation $\phi$.

We find the eigenvalues as the roots of the characteristic polynomial:

$$(3-\lambda)^2 + 9 = 0 \\ (3 + 3i - \lambda)(3 - 3i - \lambda) = 0 \\ \lambda = 3 + 3i,\ 3 - 3i \\ \to r = |\lambda| = \sqrt{3^2 + 3^2} = 3\sqrt 2$$

We then find the angle of rotation $\phi$. Here $A$ already has the form $C = \begin{bmatrix} a & -b\\ b & a \end{bmatrix}$ with $a = 3$ and $b = -3$, so:

$$\tan\phi = \frac{b}{a} = \frac{-3}{3} = -1 \\ \phi = -\pi/4$$

That is, $A$ rotates the plane clockwise by $\pi/4$ and scales by $3\sqrt 2$.
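Both $r$ and $\phi$ can be cross-checked numerically; reading the angle off the first column of $A$ via `arctan2` is our approach here, not the notes':

```python
import numpy as np

A = np.array([[3., 3.],
              [-3., 3.]])

# Scaling factor: the modulus of either complex eigenvalue.
vals = np.linalg.eig(A)[0]
r = abs(vals[0])
assert np.isclose(r, 3 * np.sqrt(2))

# A = r * R(phi): the first column of A is r*(cos phi, sin phi).
phi = np.arctan2(A[1, 0], A[0, 0])
assert np.isclose(phi, -np.pi / 4)  # clockwise rotation by pi/4
print("r and phi verified")
```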

Questions

4, 10, 16